Introduction

Overview and motivation

Video surveillance (CCTV) is nowadays deeply woven into the everyday life of many people, who have come to expect it in a wide variety of circumstances (Ossola, 2019). The rationale behind the installation of these systems seems very clear to governments. For example, on Buffalo’s (NY) open data website, one can read that “the City of Buffalo deploys a real-time, citywide video surveillance system to augment the public safety efforts of the Buffalo Police Department”. Yet the development of this technology is not free from controversy. For instance, many observers claim that the expansion of video surveillance poses an unregulated threat to privacy (ACLU, 2021). Still, many people seem willing to accept this loss of privacy because the surge in video surveillance makes them feel safer (Madden & Rainie, 2015).

Throughout this research, we challenge the widespread belief that people who have “nothing to hide” should be content with the expansion of CCTV networks because the latter make them safer (Madden & Rainie, 2015). Indeed, on top of the many privacy issues linked with this surge in video surveillance systems, one might legitimately ask whether these cameras actually make people safer.

The goal of the first phase of this project is to investigate the crime-deterrent potential of CCTVs in an American city. This potential will also be compared across the different types of crime committed in the area. In a second phase, the dispersion of CCTVs within the city will be investigated. Indeed, according to some research, mass surveillance has a stronger impact on communities already disadvantaged by their poverty, race, religion, ethnicity, or immigration status (Gellman & Adler-Bell, 2017). We would like to see whether our data enables us to validate or invalidate this theory. It would also be extremely interesting, though challenging, to see whether the installation of surveillance systems could create even more pernicious issues such as crime displacement (Waples, Gill & Fisher, 2009).

In sum, we argue that, in a world where CCTVs and other surveillance systems are flourishing, it might be beneficial to take a step back and question both the efficacy and the implementation design of such technologies, since they are often portrayed by various stakeholders as miraculous solutions to very complex issues.

Backgrounds

Augustin: Augustin obtained a degree in Business Administration at the University of St-Gallen where he had the opportunity to develop a strong interest in digital business ethics. He wrote his bachelor’s thesis on the privacy implications of the use of fear appeals in home surveillance devices’ marketing strategy.

Marine: Marine obtained a bachelor’s degree in Law at the UBO (Université de Bretagne-Occidentale). She is currently enrolled in the Master DCS (Droit, Criminalité et Sécurité des technologies de l’information) at the University of Lausanne. Last year, she had the opportunity to take a data protection course and learn more about cyber security and crime in general.

Daniel: Daniel is an exchange student from Koblenz, Germany. He obtained a bachelor’s degree in Business Administration/Management at the WHU - Otto Beisheim School of Management, Germany. He is currently pursuing a Master of Management, focusing on family businesses, entrepreneurship and data science in his courses. Of particular relevance to this project, Daniel spent several months in the United States after high school and can thus relate to the topic of police violence and crime in the US.

Motivations

Firstly, from our respective backgrounds we derive a strong interest in new technologies and privacy. We believe that every person is entitled to the fundamental right to privacy. Unfortunately, one observes an increasing tendency of governments and other stakeholders (e.g. businesses such as GAFA (Google, Amazon, Facebook, Apple)) to take ever more control over our daily lives through digital technologies such as cameras, computers or smartphones. For these reasons, it is worth asking whether this massive collection of our data leads to more security or to more restrictions of our freedom.

Secondly, under European law such as the GDPR, the collection and processing of our data must be proportionate to the purpose of that processing. It is therefore of interest to us to determine whether similar principles apply in the United States, and whether the installation of cameras, with the stated objective of security, really reduces crime and makes a city more secure.

Thirdly, it must also be said that crime and the legislative debates over the right to bear arms in the United States are fascinating. At first glance, it seems as if the freedom to carry a gun makes the US more prone to crimes such as mass shootings. To verify or falsify our hypotheses, we also want to use the datasets we obtained to see what kinds of crime prevail in American cities and how they evolve across districts and their particularities.

Research questions

  1. Does the presence of CCTVs in a given area actually deter crime?
  2. What types of crimes may be deterred by surveillance cameras?
  3. Is the impact of CCTV installation on crime reduction higher/lower/same in higher income neighborhoods compared to lower income neighborhoods?
  4. Are there more public cameras in lower income/higher unemployment areas compared to higher income/employment areas? (Does the government respect privacy issues depending on your income level?)
  5. Do we observe crime displacement issues caused by the installation of CCTV in some neighbourhoods?

Data

Data source

We have four raw data sets, all retrieved from the Baltimore government open data portal. We found data about crimes committed in Baltimore, CCTV locations in the city and poverty rates. We also found a data set containing the reference boundaries of the Community Statistical Area geographies. The latter will certainly be helpful to match each data set’s observations together.

Raw Data sets

2.1 Crime Data set

This dataset represents the location and characteristics of major crimes against persons, such as homicide, shooting, robbery, aggravated assault, etc., within the City of Baltimore. It contains 350’294 observations.

  • RowID = ID of the row, 350’294 in total

  • CrimeDateTime = date and time of the crime. Format yyyy/mm/dd hh:mm:sstzd

  • CrimeCode = Code corresponding to the type of crime committed

  • Location = Textual information on where the crime was committed

  • Description = Textual description of the crime committed corresponding to a CrimeCode.

  • Inside/Outside = Provides information on whether crime was committed inside or outside

  • Weapon = Provides details on what weapon has been used, if any

  • Post = Number corresponding to the Police Post concerned. A map with corresponding police posts can be found here: http://moit.baltimorecity.gov/sites/default/files/police_districts_w_posts.pdf?__cf_chl_captcha_tk__=pmd_NhnE710SS8QEWdKOyT5Ug6IJZGoF6iIntFYY30vctes-1634309136-0-gqNtZGzNAxCjcnBszQPl

  • District = Name of the district, regrouping different neighbourhoods. Baltimore is officially divided into nine geographical regions: North, Northeast, East, Southeast, South, Southwest, West, Northwest, and Central.

  • Neighborhood = Name of the neighborhood in which the crime was committed. Most names match the neighborhood names contained in the dataset about Community Statistical Areas.

  • Latitude = Latitude, Coordinate system: EPSG:4326 WGS 84

  • Longitude = Longitude, Coordinate system: EPSG:4326 WGS 84

  • GeoLocation = Combination of latitude and longitude, Coordinate system: EPSG:4326 WGS 84

  • Premise = Information on the premise where the crime was committed. More than 120’000 crimes were recorded in the streets.

crime_data <- read.csv(file = here::here("data/Baltimore_Part1_Crime_data.csv"))

Source of the data set: [https://data.baltimorecity.gov/datasets/part1-crime-data/explore]

2.2 CCTV Data set

This dataset represents closed circuit camera locations capturing activity within 256ft (~2 blocks). It contains 837 observations in total.

  • X = Longitude: Coordinate system: EPSG:3857 WGS 84 / Pseudo-Mercator

  • Y = Latitude: Coordinate system: EPSG:3857 WGS 84 / Pseudo-Mercator

  • OBJECTID = ID of the camera, 837 in total

  • CAM_NUM = Unique number attributed to the camera. This might suggest that the dataset does not show the location of every camera in Baltimore. Note that the CAM_NUM column contains many zeros, which we could not relate to anything; we are still trying to determine their exact meaning.

  • LOCATION = Textual information on where the camera is located

  • PROJ = Name of the area in which the camera is located. It does not always match the name of the “standard” community statistical areas.

  • XCCORD = Longitude, Coordinate system: EPSG:4326 WGS 84

  • YCOORD = Latitude, Coordinate system: EPSG:4326 WGS 84

cctv_data <- read.csv(file = here::here("data/Baltimore_CCTV_Locations_Crime_Cameras.csv"))

Source of the data set: [https://data.baltimorecity.gov/datasets/cctv-locations-crime-cameras/explore]

2.3 Poverty Data set

This dataset provides information about the percent of family households living below the poverty line. This indicator measures the percentage of households whose income fell below the poverty threshold out of all households in an area.

Federal and state governments use such estimates to allocate funds to local communities. Local communities use these estimates to identify the number of individuals or families eligible for various programs. This information will be useful for us to study the dispersion of CCTVs within Baltimore in comparison to the poverty level in a given area. This dataset contains 55 observations, one percentage for each community statistical area. There seems to be only one NA. The most relevant variables are the following:

  • CSA2010 = name of the community statistical area. The Baltimore Data Collaborative and the Baltimore City Department of Planning divided Baltimore into 55 CSAs. These 55 units combine Census Bureau geographies together in ways that match Baltimore’s understanding of community boundaries, and are used in social planning.

  • hhpov15 - hhpov19 = each of these five columns contains the Percent of Family Households Living Below the Poverty Line for a given year, from 2015 to 2019.

  • Shape_Area - Shape_Length = standard fields to determine the area and the perimeter of a polygon

poverty_data <- read.csv(file = here::here("data/Percent_of_Family_Households_Living_Below_the_Poverty_Line.csv"))

Source of the data set: [https://arcg.is/1qOrnH]

2.4 Area Data set

This dataset provides information about the Community Statistical Area geographies for Baltimore City, based on aggregations of Census tract (2010) geographies. It will serve as a geographical point of reference for us to match each dataset’s observations together. This dataset contains 55 observations, one for each area. The most relevant variables are Community (the name of the CSA) and Neigh (the neighborhoods contained in each CSA), which we use below to match the other datasets.

area_data <- read_csv(file = here::here("data/Community_Statistical_Areas__CSAs___Reference_Boundaries.csv"))

Source of the data set: [https://data.baltimorecity.gov/datasets/community-statistical-area-1/explore?location=39.284605%2C-76.620550%2C12.26]

2.5 Data Wrangling

2.5.1 Data Wrangling: Area

The main goal here is to transform the area data into a new dataset containing one observation per neighborhood name. We achieve this by first creating a new dataset in which each neighbourhood is assigned to an area, and second by creating a new column in lower case for a later merge.

area_data2 <- separate_rows(area_data, Neigh, sep = ", ") #Creation of a new dataset with each neighbourhood being assigned to an area

area_data2 <- mutate(area_data2,neigh=tolower(Neigh)) #Creation of new column with lower case letters

2.5.2 Data Wrangling: Crime

Since the neighborhood names in the crime dataset are capitalized differently, we again create a column with lower-case letters to join the two datasets. Next, we use the anti_join function to see which observations did not match. The outcome shows all the neighborhoods without a match. These, with the CSA names they correspond to, are:

  • mount washington -> Mt. Washington
  • carroll - camden industrial area -> Caroll-Camden Industrial Area
  • patterson park neighborhood -> Patterson Park
  • glenham-belhar -> Glenham-Belford
  • new southwest/mount clare -> Hollins Market
  • mount winans -> Mt. Winans
  • rosemont homeowners/tenants -> Rosemont
  • broening manor -> O’Donnell Heights
  • boyd-booth -> Booth-boyd
  • lower herring run park -> Herring Run Park
  • mt pleasant park -> Mt. Pleasant Park
crime_data <- mutate(crime_data,neigh=tolower(crime_data$Neighborhood)) #Creation of new column with lower case letters

crime_data_with_areas <- crime_data %>% 
  left_join(area_data2,by="neigh") #We create a new data sets that contains the name of the area in which the crime was committed

crime_data_NAs <- crime_data %>% 
  anti_join(area_data2,
            by="neigh") #Here is the list of all the NAs we have

unique(crime_data_NAs$neigh) #We see that we have very few unassigned names, we can change this by hand.

crime_data["neigh"][crime_data["neigh"]=="mount washington"] <- "mt. washington"
crime_data["neigh"][crime_data["neigh"]=="carroll - camden industrial area"] <- "caroll-camden industrial area"
crime_data["neigh"][crime_data["neigh"]=="patterson park neighborhood"] <- "patterson park"
crime_data["neigh"][crime_data["neigh"]=="glenham-belhar"] <- "glenham-belford"
crime_data["neigh"][crime_data["neigh"]=="new southwest/mount clare"] <- "hollins market"
crime_data["neigh"][crime_data["neigh"]=="mount winans"] <- "mt. winans"
crime_data["neigh"][crime_data["neigh"]=="rosemont homeowners/tenants"] <- "rosemont"
crime_data["neigh"][crime_data["neigh"]=="broening manor"] <- "o'donnell heights"
crime_data["neigh"][crime_data["neigh"]=="boyd-booth"] <- "booth-boyd"
crime_data["neigh"][crime_data["neigh"]=="lower herring run park"] <- "herring run park"
crime_data["neigh"][crime_data["neigh"]=="mt pleasant park"] <- "mt. pleasant park"

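The eleven manual replacements above could also be expressed more compactly with a single named lookup vector. A sketch using dplyr::recode (the mappings mirror the replacements already performed; nothing new is introduced):

```r
library(dplyr)

# Named vector: names are the crime-dataset spellings, values are the CSA spellings
neigh_fixes <- c(
  "mount washington"                 = "mt. washington",
  "carroll - camden industrial area" = "caroll-camden industrial area",
  "patterson park neighborhood"      = "patterson park",
  "glenham-belhar"                   = "glenham-belford",
  "new southwest/mount clare"        = "hollins market",
  "mount winans"                     = "mt. winans",
  "rosemont homeowners/tenants"      = "rosemont",
  "broening manor"                   = "o'donnell heights",
  "boyd-booth"                       = "booth-boyd",
  "lower herring run park"           = "herring run park",
  "mt pleasant park"                 = "mt. pleasant park"
)

# Splice the vector into recode: unmatched values pass through unchanged
crime_data <- crime_data %>%
  mutate(neigh = recode(neigh, !!!neigh_fixes))
```

This keeps all the corrections in one place, so adding a newly discovered mismatch only requires one extra entry in the vector.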

We get rid of the 764 remaining observations which had no information about the neighbourhood. Finally, we use the semi_join function to create the final dataset, which is essentially the original dataset minus those 764 observations.

To see the structure of the dataset we use the str function, and we filter for dates from 2000 onwards (since the Baltimore CCTV program started in the year 2000).

crime_data_with_areas <- crime_data %>% 
 semi_join(area_data2,by="neigh") %>% 
  left_join(area_data2,by="neigh") #Here we have the final data frame with a community for each crime

str(crime_data_with_areas) # We see that the crime CrimeDateTime column is not a date. We thus convert it.

crime_data_with_areas$CrimeDateTime <-  as.Date(crime_data_with_areas$CrimeDateTime)

crime_data_with_areas <- crime_data_with_areas %>%  filter(CrimeDateTime >= as.Date("2000-01-01")) #We had 24 observations dating back to before the year 2000 and 24 observations with no date. We only keep crimes committed after 2000 as the CCTV program in Baltimore started in 2000.

2.5.3 Data Wrangling: Poverty

The standard community statistical areas of Baltimore comprise 56 areas, one of which is the jail. The poverty data, however, only contains 55 statistical areas, since there is obviously no poverty data for the jail. To resolve this discrepancy, we add a new row. Moreover, we needed to fill a missing value for South Baltimore in the year 2019: here we took the average of the previous years.

poverty_data <- rbind(poverty_data,list(56,"Unassigned -- Jail",0,0,0,0,0,0,0))

poverty_data[48,7] <- c(poverty_data[48,3],poverty_data[48,4],poverty_data[48,5],poverty_data[48,6]) %>% mean() #The poverty rate of South Baltimore in 2019 was missing. This area's rate over the past years seems to be stable (always one of the richest areas), which is why we compute the mean of the past 4 years to replace the missing value.

2.5.4 Data Wrangling: CCTV

Here we need to make sure that there are no missing values in the CCTV dataset.

which(is.na(cctv_data$X))
which(is.na(cctv_data$Y))
filter(cctv_data, X=="")
filter(cctv_data, Y=="") 

#We are not sure this is the proper technique, but by doing so we ensure that we have neither NAs nor empty values, so that our data set is tidy.

Exploratory data analysis

3.1 Calculation of the density of CCTV per community

The original CCTV dataset posed a slight challenge: although it contained neighborhood names, those names did not match the standard neighborhood names. To solve this, we resorted to geospatial counting.

Our procedure includes the following steps. After reading in the data and converting it into a data table, we define which columns hold the coordinates; here we use X and Y. Spatial files carry a special attribute called the CRS (coordinate reference system). We define an object crs.geo1 as the coordinate system used for all our files, and assign it with the proj4string function.

#read in data table
balt_dat <-  fread(file = here::here("data/Baltimore_CCTV_Locations_Crime_Cameras.csv"))

#convert to data table
balt_dat <- as.data.table(balt_dat)

#make data spatial
coordinates(balt_dat) <-  c("X","Y")
crs.geo1 <-  CRS("+proj=merc +a=6378137 +b=6378137 +lat_ts=0 +lon_0=0 +x_0=0 +y_0=0 +k=1 +units=m +nadgrids=@null +wktext +no_defs +type=crs")
proj4string(balt_dat) <-  crs.geo1  

Then we plot the output (a cloud of points representing all the CCTVs).

plot(balt_dat, pch = 20, col = "steelblue") #We can use the plot function to quickly plot the SpatialPointDataFrame that we created. We see a bunch of points which represent the CCTV location in Baltimore.

Next, we work with the shapefile, another type of spatial file. It is essentially a set of polygons representing the different areas of the city of Baltimore. We downloaded this file from the Open Baltimore portal, read it in, and again assigned it our crs.geo1 coordinate system. This ensures that all our files share the same coordinate system.

#read in shapefile of baltimore
baltimore <-  readOGR(dsn = here::here("data/Community_Statistical_Area"), layer = "Community_Statistical_Area") #name of file and object
proj4string(baltimore) <- crs.geo1

Again we plot the results.

#plot
plot(baltimore,main="Spread of CCTVs in different communities of Baltimore")
plot(balt_dat,pch=20, col="steelblue" , add=TRUE) #Plotting these two lines together, we obtain a map of Baltimore with the 56 community statistical areas and the CCTVs on top.

To quantify these results, we need R to count how many CCTVs belong to which area. The over function determines over which polygon each CCTV is laid. Next, we tabulate the result into a new object called counts, turn it into a data frame (so that it is easier to work with), and use sum(counts$Freq) as a sanity check. The counts table has 41 rows, meaning that only 41 out of the 56 areas contain at least one CCTV.

#Perform the count
proj4string(balt_dat)
proj4string(baltimore) #To be able to perform the count, we must ensure that the two spatial files have a similar CRS. This is the case as we attributed these two files "crs.geo1" 

res2 <- over(balt_dat,baltimore) #This function tells you to which community each CCTV belongs to
counts <- table(res2$community)
counts <- as.data.frame(counts)
colnames(counts)[1] <- "Community"
sum(counts$Freq) #We see that we have 836 observations in total, which is a good sign as our initial CCTV data set contained 836 observations

To make this workable, we create a new CCTV table in which each NA location (i.e. an area with no camera) is replaced by 0. Lastly, we use the mutate function to create a new column with the CCTV density, i.e. the number of CCTVs in an area divided by the total number of CCTVs.

CCTV_per_area <- area_data[2] %>% 
  left_join(counts,by="Community") #One must add the communities where there are no counts i.e no CCTV

CCTV_per_area[is.na(CCTV_per_area)] <- 0

CCTV_per_area <- mutate(CCTV_per_area, density_perc=(CCTV_per_area$Freq/(sum(CCTV_per_area$Freq)))*100)

3.1.1 Mapping of CCTV density

Here we use the %in% operator to check that the communities in the Baltimore dataset are the same as those in the CCTV_per_area dataset. As this returns only TRUE values, the match works and we can proceed with the analysis.

library(tmap)
baltimore$community %in% CCTV_per_area$Community

Next, we perform a left_join between the Baltimore dataset and CCTV_per_area. To hedge against the different spellings of the key column (capitalized in one dataset, lower case in the other), we specify the join keys explicitly. Finally, we create the map with the tmap package, which works somewhat like ggplot2: a map always starts with the tm_shape argument, and further layers are added with the + operator. We use the Baltimore polygons, fill them with the density percentage, define breaks, set the borders and finally the layout.

baltimore@data <- left_join(baltimore@data, CCTV_per_area, by = c('community' = 'Community'))

CCTV_dens_map <- tm_shape(baltimore) + tm_fill(col = "density_perc", title ="CCTV density per Area in %", breaks=c(0,1,2,3,4,5,6,7,8,9,10,11)) + tm_borders(col="black",alpha=0.3) + tm_layout(inner.margins = 0.05)

3.2 Calculation of the crime rate per community

Here we create the crime rate per area. To achieve this, we group and summarize the crime data per community, which enables us to compute the crime rate for each area. Again, we add one more row to the result because we have no values for the jail, and we check that the rates add up to 100, which lets us proceed confidently.

CrimeRatePerArea <- crime_data_with_areas %>% 
  group_by(Community) %>%
  summarize(CrimeRatePerArea=(n()/nrow(crime_data_with_areas))*100)

CrimeRatePerArea <- rbind(CrimeRatePerArea,list("Unassigned -- Jail",0)) #We have no information about crimes committed in jail; yet the community statistical areas encompass 56 areas, including the jail. In order to ensure consistency, we must add a 56th observation to this data frame.

sum(CrimeRatePerArea$CrimeRatePerArea) #The total sum is 100, which is what we expect

3.2.1 Mapping of crime rates

Again, we map the crimes similarly to the CCTV mapping in the previous section.

library(tmap)

baltimore$community %in% CrimeRatePerArea$Community #We see that we have a perfect match

baltimore@data <- left_join(baltimore@data, CrimeRatePerArea, by = c('community' = 'Community'))

Crime_map <- tm_shape(baltimore) + tm_fill(col = "CrimeRatePerArea", title ="Crime rate per Area in %",style = "quantile") + tm_borders(col="black",alpha=0.3) + tm_layout(inner.margins = 0.05)

Crime_map

3.2.2 Creation of a distorted map

Again, we use the tmap package, together with the cartogram_ncont function, which distorts the map to highlight our results. Concretely, we want to show that the crime rate is higher in the city center, which this map conveys quite neatly.

Distorted_Crime_map <- tm_shape(cartogram_ncont(baltimore, "CrimeRatePerArea"))+tm_fill(col = "CrimeRatePerArea", title ="Crime rate per Area in %",style = "quantile") + tm_borders(col="black",alpha=0.3) + tm_layout(inner.margins = 0.07) #This map distorts the size of each area depending on their respective crime rates. It is interesting as it enables one to see that higher crime rates tends to be concentrated in the city center.

Distorted_Crime_map

3.3 Calculation of crime rates by type of crime

The first thing we do here is compute the unique values of the Description column of crime_data_with_areas. We see that we have 14 types of crime. Since we want to observe crimes by type, we introduce new classifications. The law distinguishes three basic classes of criminal offenses: infractions, misdemeanors, and felonies. Our data set contains no infractions.

  • Misdemeanor: LARCENY FROM AUTO,COMMON ASSAULT, ROBBERY - COMMERCIAL, LARCENY
  • Felony: RAPE, ARSON, HOMICIDE, BURGLARY, AUTO THEFT, ROBBERY - CARJACKING, AGG. ASSAULT, ROBBERY - STREET, ROBBERY - RESIDENCE, SHOOTING
unique(crime_data_with_areas$Description)

#We see that we have 14 types of crime. We want to observe crimes by types, therefore we want to make new classifications. The law consists of three basic classifications of criminal offenses: infractions, misdemeanors, and felonies. In our data set, we have no infractions.

#Misdemeanor:LARCENY FROM AUTO,COMMON ASSAULT, ROBBERY - COMMERCIAL, LARCENY
#Felony: RAPE, ARSON, HOMICIDE, BURGLARY, AUTO THEFT, ROBBERY - CARJACKING, AGG. ASSAULT, ROBBERY - STREET, ROBBERY - RESIDENCE, SHOOTING

Next, we create a dataset called crime_cat which tells us to which crime category each recorded crime type belongs. This dataset is then left-joined onto crime_data_with_areas. As a result, crime_data_with_areas gains a new column indicating whether the crime was a felony or a misdemeanor.

crime_cat <- data.frame(Category=c("Misdemeanor","Felony"), Description=c(c("LARCENY FROM AUTO,COMMON ASSAULT,ROBBERY - COMMERCIAL,LARCENY"),c("RAPE,ARSON,HOMICIDE,BURGLARY,AUTO THEFT,ROBBERY - CARJACKING,AGG. ASSAULT,ROBBERY - STREET,ROBBERY - RESIDENCE,SHOOTING")))

crime_cat <- separate_rows(crime_cat, Description, sep = ",")

crime_cat$Description %in% unique(crime_data_with_areas$Description) #Ensure we have a perfect match

crime_data_with_areas <- crime_data_with_areas %>% 
  left_join(crime_cat,by="Description") #We had a new variable to our crime data set

Next, we compute the crimes per category per area, this time grouping by community and category. Again, we check that we indeed have 349482 observations. From this, we compute FelonyStats and MisdemeanorStats, again adding the jail row to each dataset.

CrimePerCategoryPerArea <- crime_data_with_areas %>% 
  group_by(Community,Category) %>%
  summarize(RepartitionPerCategoryPerArea=n())

sum(CrimePerCategoryPerArea$RepartitionPerCategoryPerArea) #Again, we check that we indeed have 349482 observations

CrimeCategoryRepartition <- CrimePerCategoryPerArea %>% 
  group_by(Category) %>% 
  summarise(Repartition=sum(RepartitionPerCategoryPerArea)) #We observe that in Baltimore, the number of felony is close to the number of misdemeanor

FelonyStats <-  CrimePerCategoryPerArea %>% filter(Category=="Felony") %>% 
  mutate(FelonyRatePerArea = (RepartitionPerCategoryPerArea/CrimeCategoryRepartition$Repartition[1])*100)

FelonyStats[56,] <- list("Unassigned -- Jail","Felony",0,0)

MisdemeanorStats <-  CrimePerCategoryPerArea %>% filter(Category=="Misdemeanor") %>% 
  mutate(MisdemeanorRatePerArea = (RepartitionPerCategoryPerArea/CrimeCategoryRepartition$Repartition[2])*100)

MisdemeanorStats[56,] <- list("Unassigned -- Jail","Misdemeanor",0,0)

3.3.1 Mapping of felonies and Misdemeanors

After ensuring that we have a perfect match, we perform a left join for felonies and for misdemeanors and map everything.

#Felony

baltimore$community %in% FelonyStats$Community

baltimore@data <- left_join(baltimore@data, FelonyStats, by = c('community' = 'Community'))

Felony_map <- tm_shape(baltimore) + tm_fill(col = "FelonyRatePerArea", title ="Felony rate per Area in %",style = "quantile") + tm_borders(col="black",alpha=0.3) + tm_layout(inner.margins = 0.05)

Felony_map

#Misdemeanor

baltimore$community %in% MisdemeanorStats$Community

baltimore@data <- left_join(baltimore@data, MisdemeanorStats, by = c('community' = 'Community'))

Misdemeanor_map <- tm_shape(baltimore) + tm_fill(col = "MisdemeanorRatePerArea", title ="Misdemeanor rate per Area in %",style = "quantile") + tm_borders(col="black",alpha=0.3) + tm_layout(inner.margins = 0.05)

Misdemeanor_map

3.4 Calculation of crime evolution

The idea is to capture how crime evolved over time. We could have done this with a loop; instead, we created a dataset for each year. The results are interesting: comparing the number of observations in each per-year dataset, we see roughly 40'000 cases a year, except for 2020 (due to COVID) and 2021 (which is not yet finished). We do not create datasets for 2013 and earlier, because very few observations date from before 2014.

Crime_in_2021 <- crime_data_with_areas %>%  filter(CrimeDateTime >= as.Date("2021-01-01") & CrimeDateTime <= as.Date("2021-12-31"))

Crime_in_2020 <- crime_data_with_areas %>%  filter(CrimeDateTime >= as.Date("2020-01-01") & CrimeDateTime <= as.Date("2020-12-31"))

Crime_in_2019 <- crime_data_with_areas %>%  filter(CrimeDateTime >= as.Date("2019-01-01") & CrimeDateTime <= as.Date("2019-12-31"))

Crime_in_2018 <- crime_data_with_areas %>%  filter(CrimeDateTime >= as.Date("2018-01-01") & CrimeDateTime <= as.Date("2018-12-31"))

Crime_in_2017 <- crime_data_with_areas %>%  filter(CrimeDateTime >= as.Date("2017-01-01") & CrimeDateTime <= as.Date("2017-12-31"))

Crime_in_2016 <- crime_data_with_areas %>%  filter(CrimeDateTime >= as.Date("2016-01-01") & CrimeDateTime <= as.Date("2016-12-31"))

Crime_in_2015 <- crime_data_with_areas %>%  filter(CrimeDateTime >= as.Date("2015-01-01") & CrimeDateTime <= as.Date("2015-12-31"))

Crime_in_2014 <- crime_data_with_areas %>%  filter(CrimeDateTime >= as.Date("2014-01-01") & CrimeDateTime <= as.Date("2014-12-31"))

crime_data_with_areas %>%  filter(CrimeDateTime < as.Date("2014-01-01")) #We see that we have very few (76) observations before 2014, thus we do not consider them

Next, we calculate the crime rates for each year, grouping by community and summarizing the rates. In the end we create the crime_evolution dataset, which combines all of these yearly rates.

#_____ Calculations of the crime rates

CrimeRatePerArea2021 <- Crime_in_2021 %>% 
  group_by(Community) %>%
  summarize(CrimeRatePerArea2021=(n()/nrow(Crime_in_2021))*100)

CrimeRatePerArea2021 <- rbind(CrimeRatePerArea2021,list("Unassigned -- Jail",0))

CrimeRatePerArea2020 <- Crime_in_2020 %>% 
  group_by(Community) %>%
  summarize(CrimeRatePerArea2020=(n()/nrow(Crime_in_2020))*100)

CrimeRatePerArea2020 <- rbind(CrimeRatePerArea2020,list("Unassigned -- Jail",0))

CrimeRatePerArea2019 <- Crime_in_2019 %>% 
  group_by(Community) %>%
  summarize(CrimeRatePerArea2019=(n()/nrow(Crime_in_2019))*100)

CrimeRatePerArea2019 <- rbind(CrimeRatePerArea2019,list("Unassigned -- Jail",0))

CrimeRatePerArea2018 <- Crime_in_2018 %>% 
  group_by(Community) %>%
  summarize(CrimeRatePerArea2018=(n()/nrow(Crime_in_2018))*100)

CrimeRatePerArea2018 <- rbind(CrimeRatePerArea2018,list("Unassigned -- Jail",0))

CrimeRatePerArea2017 <- Crime_in_2017 %>% 
  group_by(Community) %>%
  summarize(CrimeRatePerArea2017=(n()/nrow(Crime_in_2017))*100)

CrimeRatePerArea2017 <- rbind(CrimeRatePerArea2017,list("Unassigned -- Jail",0))

CrimeRatePerArea2016 <- Crime_in_2016 %>% 
  group_by(Community) %>%
  summarize(CrimeRatePerArea2016=(n()/nrow(Crime_in_2016))*100)

CrimeRatePerArea2016 <- rbind(CrimeRatePerArea2016,list("Unassigned -- Jail",0))

CrimeRatePerArea2015 <- Crime_in_2015 %>% 
  group_by(Community) %>%
  summarize(CrimeRatePerArea2015=(n()/nrow(Crime_in_2015))*100)

CrimeRatePerArea2015 <- rbind(CrimeRatePerArea2015,list("Unassigned -- Jail",0))

CrimeRatePerArea2014 <- Crime_in_2014 %>% 
  group_by(Community) %>%
  summarize(CrimeRatePerArea2014=(n()/nrow(Crime_in_2014))*100)

CrimeRatePerArea2014 <- rbind(CrimeRatePerArea2014,list("Unassigned -- Jail",0))


crime_evolution <- CrimeRatePerArea2021 %>% 
  left_join(CrimeRatePerArea2020,by="Community") %>% 
  left_join(CrimeRatePerArea2019,by="Community") %>%
  left_join(CrimeRatePerArea2018,by="Community") %>%
  left_join(CrimeRatePerArea2017,by="Community") %>% 
  left_join(CrimeRatePerArea2016,by="Community") %>% 
  left_join(CrimeRatePerArea2015,by="Community") %>% 
  left_join(CrimeRatePerArea2014,by="Community")
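The eight near-identical year blocks above can be generated in one pass. A minimal sketch, assuming dplyr is loaded and crime_data_with_areas has the CrimeDateTime and Community columns used above (the function name crime_rates_by_year is ours, not from the codebase):

```r
library(dplyr)

#Sketch: compute the per-year crime shares and join them in one pass
#instead of eight copied blocks (column names match the manual version)
crime_rates_by_year <- function(data, years = 2014:2021) {
  rates <- lapply(years, function(yr) {
    yearly <- data %>%
      filter(CrimeDateTime >= as.Date(paste0(yr, "-01-01")),
             CrimeDateTime <= as.Date(paste0(yr, "-12-31")))
    out <- yearly %>%
      group_by(Community) %>%
      summarize(rate = n() / nrow(yearly) * 100, .groups = "drop")
    names(out)[2] <- paste0("CrimeRatePerArea", yr)
    rbind(out, list("Unassigned -- Jail", 0)) #keep the jail row, as above
  })
  #Join newest-first so the columns run 2021 ... 2014, as in the manual code
  Reduce(function(a, b) left_join(a, b, by = "Community"), rev(rates))
}
```

Calling `crime_evolution <- crime_rates_by_year(crime_data_with_areas)` would then reproduce the joined table, with the same caveat as the manual version: communities absent from the first (most recent) year are dropped by the left joins.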

Analysis

4.1 Crime VS CCTVs - Does the presence of CCTV deter crime?

First, we create a CCTV_VS_crimes dataset (which is essentially a left join). Next, we can plot and visually compare the crime rate per area with the CCTV density percentage per area. Here we see that there seems to be a trend.

CCTV_VS_crimes <- CCTV_per_area %>% 
  left_join(CrimeRatePerArea,by="Community")
  
View(CCTV_VS_crimes)

plot(CCTV_VS_crimes$CrimeRatePerArea,CCTV_VS_crimes$density_perc, main="Crime Rate per Community VS CCTV Density per Community",xlab="CrimeRatePerCommunity",ylab="CCTVDensityPerCommunity")

regression <- lm(CCTV_VS_crimes$density_perc~CCTV_VS_crimes$CrimeRatePerArea)
summary(regression)

y<-regression[["coefficients"]][["(Intercept)"]]
x<-regression[["coefficients"]][["CCTV_VS_crimes$CrimeRatePerArea"]]

range <- seq(from=0, to=4.5, by=0.1)

estimation <- x*range+y

lines(range,estimation, col="blue")

To confirm this, we perform a regression with the lm function and call summary() on the result. Next, we extract x and y, which are the regression coefficients (slope and intercept). We then use a trick: we create a range from 0 to 4.5 (because the plot runs roughly from 0 to 4.5). From it we build a vector called estimation, which is simply the linear function: the slope multiplied by each value in the range, plus the intercept. This gives us the fitted values, which we then plot over the data.

In the summary of the regression we see that \(R^2\) (a measure of the goodness of fit) is rather poor, but there still seems to be a tendency.
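As a side note, base R can draw the same fitted line without the manual range/estimation construction. A sketch using the same variables as above (the `data =` form also gives the coefficient a shorter name than `CCTV_VS_crimes$CrimeRatePerArea`):

```r
#Sketch: abline() draws the fitted line straight from the lm object,
#replacing the manual range/estimation construction
regression <- lm(density_perc ~ CrimeRatePerArea, data = CCTV_VS_crimes)
plot(CCTV_VS_crimes$CrimeRatePerArea, CCTV_VS_crimes$density_perc,
     main = "Crime Rate per Community VS CCTV Density per Community",
     xlab = "CrimeRatePerCommunity", ylab = "CCTVDensityPerCommunity")
abline(regression, col = "blue")
summary(regression)$r.squared #the goodness-of-fit figure discussed above
```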

4.1.1 Mapping of CCTVs and crime, felony and misdemeanor rates

In this section we map the CCTVs and crimes. The method is the same as before, using the tmap package. However, this time we have two different shapes, tm_shape(baltimore) and tm_shape(balt_dat), which layers the maps on top of each other (as in ggplot). Looking at the output gives an intuition about the data: where crime rates are lowest there seem to be fewer CCTVs (for instance in the north of the city, or even in the western areas). There seems to be a correlation between the dark red areas and the CCTV density per area.

Crime_and_CCTV_map <- tm_shape(baltimore) + tm_fill(col = "CrimeRatePerArea", title ="Crime rate per Area in %",style = "quantile") + tm_borders(col="black",alpha=0.3) + tm_layout(inner.margins = 0.05)+ tm_shape(balt_dat) + tm_dots(col="black")

Felony_and_CCTV_map <- tm_shape(baltimore) + tm_fill(col = "FelonyRatePerArea", title ="Felony rate per Area in %", style = "quantile") + tm_borders(col="black",alpha=0.3)+ tm_layout(inner.margins = 0.05) + tm_shape(balt_dat) + tm_dots(col="black")

Misdemeanor_and_CCTV_map <- tm_shape(baltimore) + tm_fill(col = "MisdemeanorRatePerArea", title ="Misdemeanor rate per Area in %",style = "quantile") + tm_borders(col="black",alpha=0.3)+ tm_layout(inner.margins = 0.05) + tm_shape(balt_dat) + tm_dots(col="black")


tmap_mode("view") #Use this command to have interactive maps

baltimore@data[["fid"]]<-baltimore@data[["community"]] #We do that so that we see the name of the Community when using an interactive map

tmap_arrange(Crime_and_CCTV_map,Felony_and_CCTV_map,Misdemeanor_and_CCTV_map)

Here we sorted the crime rate per area to find out what its range was, so that we could choose between manual breaks and the automatic "quantile" style.

sort(baltimore@data[["CrimeRatePerArea"]])

breaks1 <- c(0,0.5,1,1.5,2,2.5,3,3.5,4,4.5) #Not sure what break to use, for the moment I decided to use the automatic break system with the "quantile" parameter

tmap_mode("plot") #We go back to classic plotting

4.1.2 Analysis of where crime took place: August 2021

We are trying to see whether the presence of CCTV can deter crime. Here we first ask where crime took place, focusing on August 2021. We chose August 2021 because it is the latest full month in our dataset. Taking the latest time points assures us that most of the CCTVs in the dataset were already in place (since we have no information about when exactly these CCTVs were added). Again, as before, we create a data table, assign coordinates, and define the CRS (in this case "EPSG:4326", which we needed to transform). Then, mapping with tm_shape gives the results. The output shows where crime took place relative to the CCTV locations.

crime_spatial <- as.data.table(crime_data_with_areas %>%  filter(CrimeDateTime >= as.Date("2021-08-01") & CrimeDateTime <= as.Date("2021-08-31")))
coordinates(crime_spatial) <-  c("Longitude","Latitude")
proj4string(crime_spatial) <-  CRS("+init=epsg:4326")
crime_spatial <- spTransform(crime_spatial,crs.geo1)

August21Crimes_VS_CCTV <- tm_shape(baltimore) + tm_borders(col="black",alpha=0.3) + tm_layout(inner.margins = 0.1, title="Crimes committed in August 2021 VS CCTV location",frame.lwd = 5)+ tm_shape(balt_dat) + tm_dots(col="black")+tm_shape(crime_spatial)+tm_dots(col="red",alpha=0.5)

#It could be interesting to see where crime took place relative to CCTV locations in the area with the highest crime rate in August 2021

tmap_mode("view") #Use this command to have interactive maps
August21Crimes_VS_CCTV

Next, we decided to calculate the crime rate per area for August to see where crime was highest. The results show that the highest rate is in the Midtown area, so we will take a closer look at it.

August2021_crimes <- crime_data_with_areas %>%  filter(CrimeDateTime >= as.Date("2021-08-01") & CrimeDateTime <= as.Date("2021-08-31"))

CrimeRatePerAreaAugust2021 <- August2021_crimes %>% 
  group_by(Community) %>%
  summarize(CrimeRatePerArea=(n()/nrow(August2021_crimes))*100) #We see that Midtown is the area with the highest crime rate in August 2021; we might want to focus on that area and see whether crime takes place directly next to CCTVs

sum(CrimeRatePerAreaAugust2021$CrimeRatePerArea) #We obtain 100, which is what we expect

Using the st_bbox function with four values, which represent the extreme values on the x- and y-axes (geographical values in the coordinate system we have used so far, Pseudo-Mercator), we can continue the analysis: the function turns that rectangle into a shape of its own. When we create the Midtown map, we can then pass it as an argument to tm_shape. The output is a zoom, plus the larger map with a rectangle over the area we are analysing.

Midtown_area <-  st_bbox(c(xmin = -8531200.16, xmax = -8527039.15,
                      ymin =4763782.86, ymax = 4766467.70),
                    crs = st_crs(baltimore)) %>% st_as_sfc()
 
Midtown_map <- tm_shape(Midtown_area) + tm_borders(col="white")+ tm_shape(baltimore) + tm_borders(col="black") + tm_layout(inner.margins = 0.05,frame.lwd = 5,title = "Zoom on Midtown Area",title.position = c('left', 'top'))+tm_scale_bar(position = c("left", "top"))+ tm_shape(balt_dat) + tm_symbols(shape = 2, col = "black", size = 0.07)+tm_shape(crime_spatial)+tm_dots(col="red")

Baltimore_map_2 <- tm_shape(baltimore) + tm_borders()+ tm_shape(Midtown_area) + tm_borders(lwd = 1.5,col = "red") + tm_layout(frame.lwd = 6,inner.margins = 0.05)

tmap_mode("plot")
Midtown_map
print(Baltimore_map_2, vp = viewport(0.8, 0.27, width = 0.5, height = 0.5)) #By running these two lines together, we obtain the map with an additional overview

4.1.3 Anomaly #1 : Prison

Similar to what we have done before, we first define the prison area. It is interesting to analyse the prison and its surrounding area, since there are many CCTVs around it but essentially no crime (so it represents an outlier).

tmap_mode("plot")

Prison_area <-  st_bbox(c(xmin = -8529169.92, xmax = -8526465.97,
                      ymin =4764196.55, ymax = 4765056.50),
                    crs = st_crs(baltimore)) %>% st_as_sfc()
 
Prison_map <- tm_shape(Prison_area) + tm_borders(col="black",alpha=0.3)+ tm_shape(baltimore) + tm_fill(col = "CrimeRatePerArea", title ="Crime rate per Area in %",style = "quantile") + tm_borders(col="black") + tm_layout(inner.margins = 0.05,frame.lwd = 5,title = "Zoom on Baltimore Prison",title.position = c('left', 'top'))+tm_scale_bar(position = c("left", "top"))+ tm_shape(balt_dat) + tm_dots(col="black") #This map zooms on the prison. This "Area" is special: we have no data on crime there, and we can also see that there is a huge concentration of CCTVs directly next to the prison.


Baltimore_map <- tm_shape(baltimore) + tm_borders()+ tm_shape(Prison_area) + tm_borders(lwd = 3,col = "red") + tm_layout(frame.lwd = 6,inner.margins = 0.05)


Prison_map
print(Baltimore_map, vp = viewport(0.8, 0.27, width = 0.5, height = 0.5)) #By running these two lines together, we obtain the map with an additional overview

4.2 CCTVs VS Felonies and Misdemeanors - What types of crimes may be deterred by surveillance cameras?

This is exactly the same approach as in 4.1. We want to see whether the correlation with CCTV density differs between felonies and misdemeanors. The results show a weak \(R^2\), and graphically the correlation looks even worse. We do not see a noticeable impact of CCTV installation on either crime type.

#Felonies

plot(FelonyStats$FelonyRatePerArea,CCTV_VS_crimes$density_perc, main="Felony Rate per Community VS CCTV Density per Community",xlab="FelonyRatePerCommunity",ylab="CCTVDensityPerCommunity")

regression5 <- lm(CCTV_VS_crimes$density_perc~FelonyStats$FelonyRatePerArea)
summary(regression5) #There is a very poor correlation 

y5<-regression5[["coefficients"]][["(Intercept)"]]
x5<-regression5[["coefficients"]][["FelonyStats$FelonyRatePerArea"]]

range5 <- seq(from=-1, to=5, by=0.1)

estimation5 <- x5*range5+y5

lines(range5,estimation5, col="blue") 

#Misdemeanors

plot(MisdemeanorStats$MisdemeanorRatePerArea,CCTV_VS_crimes$density_perc, main="Misdemeanor Rate per Community VS CCTV Density per Community",xlab="MisdemeanorRatePerCommunity",ylab="CCTVDensityPerCommunity")

regression6 <- lm(CCTV_VS_crimes$density_perc~MisdemeanorStats$MisdemeanorRatePerArea)
summary(regression6) #There is a very poor correlation 

y6<-regression6[["coefficients"]][["(Intercept)"]]
x6<-regression6[["coefficients"]][["MisdemeanorStats$MisdemeanorRatePerArea"]]

range6 <- seq(from=-1, to=6, by=0.1)

estimation6 <- x6*range6+y6

lines(range6,estimation6, col="blue")
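Since the felony, misdemeanor, and later regressions all repeat the same plot-regress-overlay pattern, the pattern could be wrapped in a small helper. A minimal sketch (the function name plot_with_fit is ours, not from the codebase):

```r
#Sketch of a reusable scatter-plus-fitted-line helper for the repeated
#regressions in sections 4.1 to 4.5
plot_with_fit <- function(x, y, main, xlab, ylab) {
  plot(x, y, main = main, xlab = xlab, ylab = ylab)
  fit <- lm(y ~ x)
  abline(fit, col = "blue") #fitted line over the scatter
  invisible(fit)            #return the model so summary() can be called
}

#e.g., for the felony comparison above:
#summary(plot_with_fit(FelonyStats$FelonyRatePerArea, CCTV_VS_crimes$density_perc,
#                      "Felony Rate per Community VS CCTV Density per Community",
#                      "FelonyRatePerCommunity", "CCTVDensityPerCommunity"))
```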

4.3 Comparison of CCTV density and wealth

We wanted to see whether there is a correlation between CCTV density and wealth, so we similarly perform a regression. The results here are not conclusive, since both \(R^2\) and the adjusted \(R^2\) are poor. Visually, however, we can see interesting patterns. Looking at the map, the areas with no CCTVs are more likely to be quite wealthy. We are not sure whether wealth is the only dependency here; we rather think CCTV density is correlated with the crime rate in these areas. Again, in the northern parts we see fewer CCTVs, less crime, and a wealthier population.

plot(CCTV_per_area$density_perc,poverty_data$hhpov19, main="CCTV density VS poverty rate",xlab="CCTV Density",ylab="Poverty rate")

regression2 <- lm(poverty_data$hhpov19~CCTV_per_area$density_perc)
summary(regression2)

y2<-regression2[["coefficients"]][["(Intercept)"]]
x2<-regression2[["coefficients"]][["CCTV_per_area$density_perc"]]

range2 <- seq(from=-5, to=12, by=0.2)

estimation2 <- x2*range2+y2

lines(range2,estimation2, col="blue")

4.3.1 Mapping of CCTVs and wealth

We see a tendency: there seem to be more crimes in poorer areas.

baltimore$community %in% poverty_data$CSA2010 #We see that we have a perfect match
#>  [1] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
#> [14] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
#> [27] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
#> [40] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
#> [53] TRUE TRUE TRUE TRUE
baltimore@data <- left_join(baltimore@data, poverty_data, by = c('community' = 'CSA2010'))

Wealth_and_CCTV_map <- tm_shape(baltimore)+tm_fill(col = "hhpov19", title ="Poverty rate per Area",style = "quantile") + tm_borders(col="black",alpha=0.3) + tm_layout(inner.margins = 0.05)+ tm_shape(balt_dat) + tm_dots(col="black") #This visualization is particularly interesting. Indeed, while the regression between poverty rates and CCTV density was not very conclusive (we obtained a very poor R squared, suggesting a poor correlation between poverty and CCTV density), this map might actually explain why we obtained such a result and give us interesting hints. Indeed, we see that many CCTVs are located in Downtown/Seton Hill and in Inner Harbor/Federal Hill, namely in the centre of Baltimore, and that the poverty rate is very low in these two regions. This makes sense, as city centers are typically not the poorest areas. Yet, what we also see is that many of the least poor regions in the "suburbs" often have very low CCTV densities. This might confirm the idea that there tend to be fewer CCTVs in richer neighborhoods.

tmap_mode("view")
Wealth_and_CCTV_map

4.4 Comparison of crimes and wealth

The idea is to find areas which were impacted by a certain type of crime. We see a fairly even distribution between felonies and misdemeanors. There seems to be a strong tendency in Downtown, which is at the same time one of the richest areas of Baltimore. We do not have enough information to draw conclusions, but it could be that there are fewer felonies in this area.

plot(CrimeRatePerArea$CrimeRatePerArea,poverty_data$hhpov19, main="Crime Rate VS poverty rate",xlab="Crime rate",ylab="Poverty rate")

regression3 <- lm(poverty_data$hhpov19~CrimeRatePerArea$CrimeRatePerArea)
summary(regression3)

y3<-regression3[["coefficients"]][["(Intercept)"]]
x3<-regression3[["coefficients"]][["CrimeRatePerArea$CrimeRatePerArea"]]

range3 <- seq(from=-2, to=4.5, by=0.1)

estimation3 <- x3*range3+y3

lines(range3,estimation3, col="blue")

4.5 Felonies VS Misdemeanors - Do we have an equal crime type distribution?

plot(FelonyStats$FelonyRatePerArea,MisdemeanorStats$MisdemeanorRatePerArea,main="Felony VS Misdemeanor", xlab="Felony",ylab="Misdemeanor") #This allows us to see whether Felony and Misdemeanors are correlated. This seems to be the case

regression4 <- lm(FelonyStats$FelonyRatePerArea~MisdemeanorStats$MisdemeanorRatePerArea)
summary(regression4)

y4<-regression4[["coefficients"]][["(Intercept)"]]
x4<-regression4[["coefficients"]][["MisdemeanorStats$MisdemeanorRatePerArea"]]

range4 <- seq(from=-1, to=5, by=0.1)

estimation4 <- x4*range4+y4

lines(range4,estimation4, col="blue")

#Still, we see that there seem to be some outliers. It may or may not be relevant, but the biggest outlier is Downtown/Seton Hill, which turns out to also be one of the richest areas in Baltimore.

4.6

  • Answers to the research questions
  • Different methods considered
  • Competing approaches
  • Justifications

Conclusion

  • Take home message
  • Limitations
  • Future work?